






Feature Learning for Interpretable, Performant Decision Trees

Neural Information Processing Systems

Points were sampled uniformly in the bands denoted by dashed lines. We posit that these barriers are due, at least in part, to the sensitivity of decision trees to transformations of the input, which results from greedy construction and simple decision rules. Of these, the key limitation is the latter: even if we replace greedy construction with a perfect tree learner, simple distributions can nonetheless require an arbitrarily large axis-aligned tree to fit.
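A minimal sketch of this limitation, in pure Python (the data distribution, the `stump_accuracy` helper, and the threshold search are illustrative assumptions, not the paper's method): labels depend on the diagonal x + y > 0, so no single axis-aligned split separates the classes, while one split on the learned/rotated feature x + y is perfect.

```python
import random

# Illustrative data: label is 1 iff the point lies above the diagonal x + y = 0.
random.seed(0)
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(400)]
labels = [1 if x + y > 0 else 0 for x, y in pts]

def stump_accuracy(values, labels):
    """Best accuracy of a single threshold split on one feature (either polarity)."""
    best = 0.0
    for t in sorted(set(values)):
        correct = sum(1 for v, y in zip(values, labels) if (v > t) == (y == 1))
        acc = correct / len(labels)
        best = max(best, acc, 1.0 - acc)
    return best

# Best single axis-aligned split: stuck near 75% accuracy on this distribution.
axis_acc = max(stump_accuracy([x for x, _ in pts], labels),
               stump_accuracy([y for _, y in pts], labels))

# One split on the rotated feature x + y separates the classes exactly.
rotated_acc = stump_accuracy([x + y for x, y in pts], labels)
```

Pushing the axis-aligned tree deeper only approximates the diagonal with a staircase of splits, so reaching a fixed accuracy requires a tree whose size grows with the desired precision; a single learned linear feature removes the problem entirely.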






Figure 1: Protein dataset with a random forest across 140 evaluations, with different NN structures for distGPs.

Neural Information Processing Systems

Thank you to all the reviewers for their time and effort. Thank you for your detailed review. Here, the idea is to re-train our model when new data is available. Here we explain our design space (see additional details in Appendices A.3, B, and C): (i) choice of embedding (joint vs

Reviewer 3: Thank you for your review; for comments regarding the experiments, please see above. Thank you for your positive comments regarding the quality of the paper.